I ran an extensive test last evening/night:
All of the radiosity images I am currently uploading
to my website for radiosity explanation and experience
sharing use a two-pass method that originates
from Gilles Tran (hm, come to think of it, I realize
that I haven't mentioned that on my website yet...).
The idea is to use an error_bound of 0.1 in the first
pass and save the radiosity data, then load that data
and render with an error_bound of either 0.4 or 0.8
(whichever is sufficient for smooth radiosity lighting).
To ensure these settings are actually used,
low_error_factor is set to 1, which disables the
automatic lowering of error_bound during the
pretrace/first pass.
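For reference, the two-pass setup boils down to something like the following sketch (POV-Ray 3.5 syntax; the file name "scene.rad" is just a placeholder, and the two global_settings blocks belong to two separate renders):

```pov
// Pass 1: sample with a tight error_bound and save the data.
global_settings {
  radiosity {
    error_bound 0.1
    low_error_factor 1        // keep error_bound fixed during pretrace
    save_file "scene.rad"     // placeholder file name
  }
}

// Pass 2 (a separate render of the same scene): load the saved
// samples and render with a looser error_bound for smooth lighting.
global_settings {
  radiosity {
    error_bound 0.8           // or 0.4, whichever is sufficient
    low_error_factor 1
    load_file "scene.rad"
  }
}
```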
Point is: rendering a 640x480 image took somewhere
between half an hour and an hour for both passes
combined.
Now, I figured that low_error_factor could be used
for the same purpose: it lowers error_bound during the
pretrace and restores it for the final/actual trace.
So, setting error_bound to 0.8 and low_error_factor
to 0.125, I assumed I'd get an effective error_bound
of 0.1 for the pretraces (since 0.8 * 0.125 = 0.1),
but render the final pass at error_bound 0.8.
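In other words, I replaced the two-pass setup with a single render along these lines (again just a sketch of the settings in question):

```pov
global_settings {
  radiosity {
    error_bound 0.8          // used for the final trace
    low_error_factor 0.125   // pretrace: 0.8 * 0.125 = 0.1
  }
}
```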
And the result was catastrophic (not in image quality,
but in rendering time): 14 hours and 15 minutes!!
The images themselves look the same.
Now, I'd like to know: does low_error_factor do
something other than what the documentation says,
or have I misunderstood it? I've read section
"6.11.11.2.7 low_error_factor", and it plainly says
that it lowers error_bound on the pre-final passes
and restores it for the final trace...
Any comments on that?
--
Tim Nikias v2.0
Homepage: http://www.digitaltwilight.de/no_lights
---